Visual odometry estimates the ego-motion of an agent (e.g., a vehicle or robot) from image information and is a key component of autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion using a stereo rig with optical flow analysis. An objective function over a set of feature points is constructed by establishing the mathematical relationship between optical flow, depth, and the camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. The six motion parameters are then computed by minimizing this objective function with the iterative Levenberg–Marquardt method. A key requirement for visual odometry is that the feature points selected for the computation contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected with the Kanade–Lucas–Tomasi (KLT) algorithm. Circle matching is then applied to remove outliers caused by KLT mismatches. A spatial position constraint is imposed to filter out moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of remaining outliers. The surviving points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos, and the results demonstrate the robustness and precision of the method.
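The core estimation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the classical instantaneous motion-field model (Longuet-Higgins/Prazdny convention) relating optical flow to depth and the six motion parameters, and uses SciPy's Levenberg–Marquardt solver to minimize the resulting residual. The focal length, sign conventions, and function names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def motion_field(params, x, y, Z, f):
    """Predicted optical flow (u, v) at image points (x, y) with depth Z,
    under the instantaneous motion-field model for a camera with focal
    length f.  params = (wx, wy, wz, tx, ty, tz): rotation and translation.
    Sign conventions follow the Longuet-Higgins/Prazdny formulation."""
    wx, wy, wz, tx, ty, tz = params
    # Translational component scales with inverse depth; rotational
    # component is depth-independent.
    u = (x * tz - f * tx) / Z + (x * y * wx / f - (f + x**2 / f) * wy + y * wz)
    v = (y * tz - f * ty) / Z + ((f + y**2 / f) * wx - x * y * wy / f - x * wz)
    return u, v

def estimate_egomotion(x, y, Z, u_obs, v_obs, f=500.0):
    """Recover the six ego-motion parameters by Levenberg-Marquardt
    minimization of the flow residual over the (inlier) feature points."""
    def residuals(p):
        u, v = motion_field(p, x, y, Z, f)
        return np.concatenate([u - u_obs, v - v_obs])
    sol = least_squares(residuals, x0=np.zeros(6), method='lm')
    return sol.x
```

In practice the point set fed to `estimate_egomotion` would first be cleaned by the outlier-rejection stages the abstract describes (circle matching, the spatial position constraint, and RANSAC), since the least-squares fit is only as good as its inliers.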